libtorch: new recipe #24759
base: master
Conversation
XNNPACK was not correctly added to project dependencies. Prefer namespaced targets, if possible.
Hooks produced the following warnings for commit 87a1370 (libtorch/2.4.0@#f680755600363ae5e29186ad5b798792):
Conan v1 pipeline ❌: failure in build 6.
Conan v2 pipeline ❌: failure in build 8. The v2 pipeline failed. Please review the errors; note that a passing v2 pipeline is required for pull requests to be merged. In case this recipe is still not ported to Conan 2.x, please ping.
Note: To save resources, CI tries to finish as soon as an error is found. For this reason you might find that not all the references, or not all the configurations for a given reference, have been launched. Also, take into account that we cannot guarantee the order of execution, as it depends on CI workload and worker availability.
Hello @valgur, thanks for this amazing PR. Do you plan to continue working on it? 🤞 Having libtorch in Conan would be so neat. Since OpenMPI is now available, do you plan to let the user enable the distributed feature?
tc.variables["BLAS"] = self._blas_cmake_option_value
tc.variables["MSVC_Z7_OVERRIDE"] = False
Incidentally, this also needs:
tc.variables["CMAKE_CXX_EXTENSIONS"] = True
I tested this while running a build that sets compiler.cppstd. If it uses a non-GNU standard (which for other packages it must be), ATen breaks with the same error as pytorch/QNNPACK#67. Setting CMAKE_CXX_EXTENSIONS converts, for example, -std=c++17 to -std=gnu++17.
It's probably not necessary on Windows, but it also shouldn't hurt.
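To illustrate the point above, here is a minimal, hypothetical sketch (plain Python, not the actual recipe or CMake internals) of how CMAKE_CXX_EXTENSIONS changes the standard flag that GCC/Clang receive:

```python
def cxx_standard_flag(cppstd: int, extensions: bool) -> str:
    """Mimic how CMake picks the -std flag for GCC/Clang:
    CMAKE_CXX_EXTENSIONS=ON selects the GNU dialect (gnu++NN),
    OFF selects the strict ISO dialect (c++NN)."""
    dialect = "gnu" if extensions else "c"
    return f"-std={dialect}++{cppstd}"

# With extensions enabled, ATen gets the GNU dialect it expects:
print(cxx_standard_flag(17, True))   # -std=gnu++17
# A strict compiler.cppstd setting yields the ISO form that breaks ATen:
print(cxx_standard_flag(17, False))  # -std=c++17
```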
whole_archive = f"-WHOLEARCHIVE:{lib_fullpath}"
else:
    lib_fullpath = os.path.join(lib_folder, f"lib{libname}.a")
    whole_archive = f"-Wl,--whole-archive,{lib_fullpath},--no-whole-archive"
Thanks for your work on this PR. I am not using this library directly, but found it through following some GitHub issues on whole-archive linking. For this line, I wonder if it is possible to do this with -Wl,--push-state,...,--pop-state instead? See e.g.
https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_LINK_LIBRARY_USING_FEATURE.html#loading-a-whole-static-library
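As a sketch of that suggestion (assuming GNU ld / LLD semantics; the function name and arguments are illustrative, not from the recipe), --push-state/--pop-state scopes --whole-archive to a single library instead of resetting it with an explicit --no-whole-archive:

```python
import os

def whole_archive_flag(lib_folder: str, libname: str, msvc: bool = False,
                       use_push_state: bool = False) -> str:
    """Build a linker flag that forces every object in a static library
    to be linked in. With --push-state/--pop-state, the --whole-archive
    state is saved and restored around this one library, so later
    libraries on the command line are unaffected."""
    if msvc:
        lib_fullpath = os.path.join(lib_folder, f"{libname}.lib")
        return f"-WHOLEARCHIVE:{lib_fullpath}"
    lib_fullpath = os.path.join(lib_folder, f"lib{libname}.a")
    if use_push_state:
        return f"-Wl,--push-state,--whole-archive,{lib_fullpath},--pop-state"
    return f"-Wl,--whole-archive,{lib_fullpath},--no-whole-archive"

print(whole_archive_flag("/lib", "torch", use_push_state=True))
```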
Summary
Changes to recipe: libtorch/2.4.0
Motivation
Tensors and Dynamic neural networks in Python with strong GPU acceleration.
https://github.com/pytorch/pytorch
Details
Continues from #5100 by @SpaceIm.
CUDA, HIP and SYCL backends are currently disabled since the PR is complex enough already and these can be addressed in a follow-up PR. Vulkan and Metal (TODO) should be usable as GPU backends currently.
The distributed feature is disabled as well, both to limit the scope and because openmpi is not yet available (#18980).
Android and iOS builds are probably broken and need testing.
Non-OpenBLAS BLAS backends are probably not usable due to OpenBLAS being required for LAPACK. A separate LAPACK recipe would be required to fix that (such as #23798).
Closes #6861.
TODO:
- Package pocketfft and unvendor it.